Cloud Computing: Power, Costs & Market Trends
Observing the Cloud Provider market, together with the market for server, networking and storage equipment, prompts some reflection on how farsighted certain strategies really were.
The well-known Moore’s Law (“the number of transistors per chip doubles roughly every 18 months”), which corresponds to an annual increase of about 60% in the RAM and CPU power of our servers, has accustomed us to the rapid obsolescence of our personal computers, as well as to the short amortization periods of server equipment.
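A quick check of that figure, taking the 18-month doubling at face value, gives the yearly growth factor:

\[ 2^{12/18} = 2^{2/3} \approx 1.59 \]

that is, roughly 60% more capacity per year.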
Before the virtualization revolution, the CPU industry pushed clock frequencies to their limits; now it is oriented towards parallelism, with ever more cores per chip.
The wide adoption of public Cloud Computing (meaning the large cloud providers) has marked, and continues to mark, a new revolution: a paradigm that transforms CapEx into OpEx, cutting expenses in favour of efficiency, shorter development times, redundancy, reliability, and so on. It has also shaken, and temporarily put into crisis, the large hardware industry (Dell, IBM, HP, Cisco; a crisis also reported by Gartner), which has seen equipment sales fall as SMEs, attentive to innovation and to capital expenditure, have preferred to start their business in a more elastic, pay-per-use way. The big majors continued to sell, but mainly to the large datacenters. They thus found themselves with full warehouses, and sooner or later some of them turned to reusing that hardware to offer Cloud services themselves.
We remember IBM’s large investments in datacenters dedicated to Cloud services, and the large research centres that have sprung up around the world precisely and exclusively for the Cloud; we remember HP, which has been investing with Yahoo in Cloud projects for several years, and which was among the first, together with CERN, to work on paradigms very close to what later became Cloud Computing. In short, the market has driven the choices of the big names, above all in terms of acquisitions of smaller but innovative companies, joint ventures, and everything else we have seen and recorded so far.
But what do we notice that does not follow the market, and that sooner or later may have consequences, if it is not having them already?
Cloud Providers sell computational services on a pay-as-you-go basis, yet there has never been a unit of measurement or a standard of comparison; it is often the surrounding services that make the difference rather than the power of the hardware, and meanwhile the prices of their virtual server instances stay fixed.
If I buy a physical server, or rent one from a classic service provider, I am generally given the latest configuration, the most modern and therefore also the most powerful; a Cloud Provider, on the other hand, currently gives me only rather imprecise, general indications of the power of my virtual machine. Obviously the comparison is hard to make for a thousand reasons, but care must be taken, because the offer for building Private Clouds, both open source and proprietary, is increasingly competitive.
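Since instance “power” is declared in such vague terms, the only practical yardstick is to measure it yourself. Below is a minimal, hypothetical sketch of the kind of micro-benchmark one might run on two instances (say, one public and one private) to compare raw CPU throughput; the workload and the resulting “score” are purely illustrative, not a standard metric.

```python
import time

def cpu_score(duration_s: float = 5.0) -> float:
    """Count iterations of a tiny arithmetic kernel completed per second
    of wall-clock time; higher means faster (illustrative only)."""
    iterations = 0
    x = 1.0001
    start = time.perf_counter()
    deadline = start + duration_s
    while time.perf_counter() < deadline:
        # Small batch of floating-point work to keep timer overhead low.
        for _ in range(10_000):
            x = x * 1.0000001 + 0.0000001
        iterations += 10_000
    elapsed = time.perf_counter() - start
    return iterations / elapsed

if __name__ == "__main__":
    # Run the same script on each virtual machine being compared
    # (for example, a public cloud instance and a private cloud one)
    # and compare the printed scores.
    print(f"kernel iterations per second: {cpu_score():,.0f}")
```

A single-threaded loop like this obviously says nothing about I/O, memory bandwidth or multi-core behaviour; it only makes the point that the comparison has to be measured rather than read off a price list.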
Moreover, a Cloud Provider born today with brand-new equipment certainly has more unit power than an Amazon, if we compare them with the same sales yardstick Amazon uses.
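To get a feel for that gap, and assuming the roughly 60% yearly growth mentioned above keeps holding, equipment bought n years later offers about 1.6^n times the unit power; as a purely illustrative figure, for a three-year difference:

\[ 1.6^{3} \approx 4.1 \]

so a newcomer’s freshly bought servers could offer roughly four times the unit power of machines deployed three years earlier.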
In its latest analysis, Gartner notes that hardware sales are recovering, precisely because the surplus of supply has pushed its prices down compared with the costs of Cloud Providers, despite the growing number of competitors to the big Amazon Web Services.
By now, however, the revolution of virtualization and Cloud Computing cannot be stopped, and everything we knew as datacenter administration will increasingly be transformed along these lines; yet the recovery in hardware sales suggests an almost mandatory evolution towards the Hybrid Cloud.
That is, what we will see will resemble VMware’s vision (the Federated Clouds) or that of the various AWS clones (Eucalyptus): many Public and Private Clouds linked by Hybrid solutions.
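What makes this kind of federation practical is API compatibility: Eucalyptus, for instance, exposes EC2-compatible interfaces, so the same client code can target either side. The sketch below is a hypothetical illustration using boto3; the private endpoint URL, region name and any credentials are placeholders, not values from a real deployment.

```python
import boto3

# Hypothetical endpoint for a private, EC2-compatible cloud
# (e.g. a Eucalyptus installation behind the firewall).
PRIVATE_ENDPOINT = "https://cloud.example.internal:8773/services/compute"  # placeholder

def list_instances(client) -> list[str]:
    """Return the IDs of all instances visible through the given EC2 client."""
    ids = []
    for reservation in client.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            ids.append(instance["InstanceId"])
    return ids

# Same client library, same calls; only the endpoint changes.
public_ec2 = boto3.client("ec2", region_name="us-east-1")
private_ec2 = boto3.client("ec2", endpoint_url=PRIVATE_ENDPOINT,
                           region_name="private-cloud-1")

print("Public instances:", list_instances(public_ec2))
print("Private instances:", list_instances(private_ec2))
```

The point of the sketch is simply that a Hybrid setup lets the same tooling enumerate and move workloads across both sides, which is exactly the glue the Federated Cloud vision relies on.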
Obviously, this will give room and breathing space to datacenters specialized purely in co-location, bandwidth continuity, power and perimeter security: just as it was already unthinkable to keep a complex infrastructure in-house, it will be even more unthinkable to keep in-house a Private Cloud infrastructure that must guarantee continuous connectivity with the Public Cloud.